Vaccines
The Download: an exclusive chat with Jim O'Neill, and the surprising truth about heists
Over the past year, Jim O'Neill has become one of the most powerful people in public health. As the US deputy health secretary, he holds two roles at the top of the country's federal health and science agencies. He oversees a department with a budget of over a trillion dollars. And he signed the decision memorandum on the US's deeply controversial new vaccine schedule. In an exclusive interview earlier this month, O'Neill described his plans to increase human healthspan through longevity-focused research supported by ARPA-H, a federal agency dedicated to biomedical breakthroughs. Fellow longevity enthusiasts said they hope he will bring attention and funding to their cause.
- Asia > China (0.07)
- Oceania > Australia (0.05)
- South America > Chile (0.05)
- (5 more...)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Therapeutic Area > Vaccines (0.71)
- Government > Regional Government > North America Government > United States Government (0.48)
- North America > Canada (0.04)
- Asia > China (0.04)
- North America > United States > Virginia (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Vaccines (0.40)
- Health & Medicine > Therapeutic Area > Immunology (0.40)
Health Department Will Mine Unverified Vaccine Injury Claims With New AI Tool
Experts worry it will be used to further Robert F. Kennedy Jr.'s anti-vaccine agenda. The US Department of Health and Human Services (HHS) is developing a generative artificial intelligence tool to find patterns across data reported to a national vaccine monitoring database and to generate hypotheses on the negative effects of vaccines, according to an inventory released last week of all use cases the agency had for AI in 2025. The tool has not yet been deployed, according to the HHS document, and an AI inventory report from the previous year shows that it has been in development since late 2023. But experts worry that the predictions it generates could be used by HHS secretary Robert F. Kennedy Jr. to further his anti-vaccine agenda.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > South Carolina (0.05)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
HHS Is Making an AI Tool to Create Hypotheses About Vaccine Injury Claims
Experts worry Robert F. Kennedy Jr.'s Health Department will use an internal AI tool to analyze vaccine injury claims in a way that furthers his anti-vaccine agenda. The US Department of Health and Human Services is developing a generative artificial intelligence tool to find patterns across data reported to a national vaccine monitoring database and to generate hypotheses on the negative effects of vaccines, according to an inventory released last week of all use cases the agency had for AI in 2025. The tool has not yet been deployed, according to the HHS document, and an AI inventory report from the previous year shows that it has been in development since late 2023. But experts worry that the predictions it generates could be used by Health and Human Services secretary Robert F. Kennedy Jr. to further his anti-vaccine agenda. A long-standing vaccine critic, Kennedy has upended the childhood vaccination schedule in his year in office, removing several shots from a list of recommended immunizations for all children, including those for Covid-19, influenza, hepatitis A and B, meningococcal disease, rotavirus, and respiratory syncytial virus, or RSV.
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Asia > China (0.05)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Government > Regional Government > North America Government > United States Government > FDA (0.31)
RFK's Overhauled Autism Committee Is Even Worse Than It Looks
Kennedy has stacked another HHS panel with his fellow travelers in the anti-vaccine and pseudoscience world. Last April, Health and Human Services Secretary Robert F. Kennedy, Jr. promised that his agency would find the cause of autism "by September." That didn't pan out, but this week he appears to be trying again, this time by stacking a decades-old committee devoted to "innovations in autism research, diagnosis, treatment, and prevention" with his friends and fellow travelers in the anti-vaccine and pseudoscience world. Much like the Centers for Disease Control and Prevention's Advisory Committee on Immunization Practices, which he overhauled last fall with a full slate of new appointees after firing all the old members, Kennedy has filled the Interagency Autism Coordinating Committee (IACC), first established in 2000 to help set the federal agenda for autism research, with his allies in the anti-vaccine movement.
- Oceania > Samoa (0.04)
- North America > United States > Minnesota (0.04)
- North America > United States > Massachusetts (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Autism (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- (2 more...)
Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning Attack
The new paradigm of fine-tuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a few harmful data points uploaded by users can easily trick the fine-tuning into producing an alignment-broken model. We conduct an empirical analysis and uncover a harmful embedding drift phenomenon, showing a probable cause of the alignment-broken effect. Inspired by our findings, we propose Vaccine, a perturbation-aware alignment technique to mitigate the security risk of user fine-tuning. The core idea of Vaccine is to produce invariant hidden embeddings by progressively adding crafted perturbations to them in the alignment phase. This enables the embeddings to withstand harmful perturbation from unsanitized user data in the fine-tuning phase. Our results on mainstream open-source LLMs (e.g., Llama2, OPT, Vicuna) demonstrate that Vaccine can boost the robustness of alignment against embedding drift induced by harmful prompts while preserving reasoning ability on benign prompts.
- Health & Medicine > Therapeutic Area > Vaccines (0.81)
- Health & Medicine > Therapeutic Area > Immunology (0.81)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.81)
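To make the mechanism concrete, here is a minimal sketch of a perturbation-aware alignment step in the spirit of the abstract above. It is not the authors' implementation: it assumes a HuggingFace-style causal language model, applies the crafted perturbation only to the input embeddings rather than to every layer's hidden states, and the perturbation budget `rho` is an illustrative choice.

```python
import torch

def vaccine_style_step(model, batch, optimizer, rho=0.1):
    """One alignment step optimized against a worst-case perturbation of the input embeddings."""
    input_ids, labels = batch["input_ids"], batch["labels"]

    # Pass 1: find the embedding-space direction that most increases the alignment loss.
    embed = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    loss = model(inputs_embeds=embed, labels=labels).loss
    grad = torch.autograd.grad(loss, embed)[0]

    # Craft a bounded perturbation along that direction (norm-scaled ascent step).
    perturb = rho * grad / (grad.norm() + 1e-12)

    # Pass 2: minimize the loss under the perturbed embeddings, so the alignment learned
    # here stays intact when later fine-tuning drifts the embeddings by a similar amount.
    embed2 = model.get_input_embeddings()(input_ids)
    loss_adv = model(inputs_embeds=embed2 + perturb, labels=labels).loss
    optimizer.zero_grad()
    loss_adv.backward()
    optimizer.step()
    return loss_adv.item()
```

The two forward passes mirror sharpness-aware-style training: the first finds the embedding perturbation that most increases the alignment loss, and the second updates the weights so the alignment survives that drift.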
Diffusion-based Molecule Generation with Informative Prior Bridges
AI-based molecule generation provides a promising approach to many areas of biomedical science and engineering, such as antibody design, hydrolase engineering, and vaccine development. Because molecules are governed by physical laws, a key challenge is to incorporate prior information into the training procedure to generate high-quality and realistic molecules. We propose a simple and novel approach to steer the training of diffusion-based generative models with physical and statistical prior information. This is achieved by constructing physically informed diffusion bridges, stochastic processes that are guaranteed to yield a given observation at a fixed terminal time. We develop a Lyapunov-function-based method to construct and determine these bridges, and propose a number of informative prior bridges for both high-quality molecule generation and uniformity-promoted 3D point cloud generation. With comprehensive experiments, we show that our method provides a powerful approach to the 3D generation task, yielding molecule structures with better quality and stability scores and more uniformly distributed, high-quality point clouds.
- Health & Medicine > Pharmaceuticals & Biotechnology (0.98)
- Health & Medicine > Therapeutic Area > Vaccines (0.60)
- Health & Medicine > Therapeutic Area > Immunology (0.60)
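To illustrate what a diffusion bridge with an informative prior looks like, here is a small sketch under simplifying assumptions (this is not the paper's method or code): a Brownian-bridge drift pins the simulated path to a target observation at the terminal time, and a hypothetical physics-style force is added to the drift as the prior.

```python
import numpy as np

def simulate_prior_bridge(x0, x_T, prior_force, T=1.0, n_steps=200, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of dX_t = [(x_T - X_t)/(T - t) + f(X_t)] dt + sigma dW_t."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for i in range(n_steps - 1):                 # stop one step early to avoid dividing by zero
        t = i * dt
        bridge_drift = (x_T - x) / (T - t)       # pulls the path toward the fixed endpoint
        drift = bridge_drift + prior_force(x)    # inject prior information into the drift
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        path.append(x.copy())
    path.append(np.array(x_T, dtype=float))      # terminal observation is hit by construction
    return np.stack(path)

# Example: a toy harmonic "physics" prior that nudges coordinates toward the origin.
path = simulate_prior_bridge(x0=np.zeros(3), x_T=np.array([1.0, -2.0, 0.5]),
                             prior_force=lambda x: -0.1 * x)
```

The bridge drift term is what guarantees the process ends at the given observation; the prior force is the slot where physical or statistical information enters.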
Modern Hopfield Networks and Attention for Immune Repertoire Classification
A central mechanism in machine learning is to identify, store, and recognize patterns. How to learn, access, and retrieve such patterns is crucial in Hopfield networks and the more recent transformer architectures. We show that the attention mechanism of transformer architectures is actually the update rule of modern Hopfield networks that can store exponentially many patterns. We exploit this high storage capacity of modern Hopfield networks to solve a challenging multiple instance learning (MIL) problem in computational biology: immune repertoire classification. In immune repertoire classification, a vast number of immune receptors are used to predict the immune status of an individual. This constitutes a MIL problem with an unprecedentedly massive number of instances, two orders of magnitude larger than currently considered problems, and with an extremely low witness rate. Accurate and interpretable machine learning methods solving this problem could pave the way towards new vaccines and therapies, which is currently a very relevant research topic intensified by the COVID-19 crisis.
- Health & Medicine > Therapeutic Area > Vaccines (0.59)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.59)
- Health & Medicine > Therapeutic Area > Immunology (0.59)
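The equivalence claimed above between attention and the modern Hopfield update has a compact form: a state (query) pattern xi is updated as xi_new = X softmax(beta X^T xi), where the columns of X are the stored patterns. The toy sketch below, with made-up names and dimensions, shows this update retrieving a stored pattern from a noisy query; it is illustrative only and not the paper's repertoire classifier.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=8.0, n_iters=3):
    """X: (d, N) stored patterns; xi: (d,) query/state. Iterates the modern Hopfield update."""
    for _ in range(n_iters):
        xi = X @ softmax(beta * (X.T @ xi))   # single-query attention over the stored patterns
    return xi

# Retrieval demo: a noisy version of one stored pattern converges back toward it.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 5))              # 5 stored "receptor" patterns of dimension 16
query = X[:, 2] + 0.3 * rng.standard_normal(16)
retrieved = hopfield_retrieve(X, query)
print(np.argmax(X.T @ retrieved))             # typically recovers pattern index 2
```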
A Reproducible Framework for Neural Topic Modeling in Focus Group Analysis
Arfaoui, Heger, Hergli, Mohammed Iheb, Benzina, Beya, BenMiled, Slimane
Focus group discussions generate rich qualitative data, but their analysis traditionally relies on labor-intensive manual coding that limits scalability and reproducibility. We present a systematic framework for applying BERTopic to focus group transcripts using data from ten focus groups exploring HPV vaccine perceptions in Tunisia (1,075 utterances). We conducted comprehensive hyperparameter exploration across 27 configurations, evaluating each through bootstrap stability analysis, performance metrics, and comparison with an LDA baseline. Bootstrap analysis revealed that stability metrics (NMI and ARI) exhibited strong disagreement (r = -0.691) and showed divergent relationships with coherence, demonstrating that stability is multifaceted rather than monolithic. Our multi-criteria selection framework yielded a 7-topic model achieving 18% higher coherence than optimized LDA (0.573 vs. 0.486), with interpretable topics validated through independent human evaluation (ICC = 0.700, weighted Cohen's kappa = 0.678). These findings demonstrate that transformer-based topic modeling can extract interpretable themes from small focus group transcript corpora when systematically configured and validated, while revealing that quality metrics capture distinct, sometimes conflicting constructs requiring multi-criteria evaluation. We provide complete documentation and code to support reproducibility.
- Africa > Middle East > Tunisia > Tunis Governorate > Tunis (0.05)
- Asia > Middle East > Jordan (0.04)
- Europe > Switzerland (0.04)
- Europe > Poland (0.04)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
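As a rough illustration of the kind of pipeline described above, the sketch below fits BERTopic to a list of utterances and probes assignment stability across bootstrap resamples with NMI and ARI. It is a hedged reconstruction, not the released code: `utterances` is a placeholder for the transcript data, the hyperparameters and the 20 bootstrap replicates are arbitrary, and coherence scoring and the LDA baseline are omitted.

```python
import numpy as np
from bertopic import BERTopic
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

utterances = [...]  # placeholder: list of focus-group utterance strings

# Reference fit on the full corpus.
base_model = BERTopic(min_topic_size=10, calculate_probabilities=False)
base_topics, _ = base_model.fit_transform(utterances)

nmi_scores, ari_scores = [], []
rng = np.random.default_rng(42)
for _ in range(20):                                   # 20 bootstrap replicates (illustrative)
    idx = rng.integers(0, len(utterances), len(utterances))
    boot_docs = [utterances[i] for i in idx]
    boot_model = BERTopic(min_topic_size=10).fit(boot_docs)
    # Re-assign the original utterances with the bootstrap model and compare labelings.
    boot_topics, _ = boot_model.transform(utterances)
    nmi_scores.append(normalized_mutual_info_score(base_topics, boot_topics))
    ari_scores.append(adjusted_rand_score(base_topics, boot_topics))

print(f"NMI: {np.mean(nmi_scores):.3f}  ARI: {np.mean(ari_scores):.3f}")
```

Reporting both NMI and ARI matters here precisely because, as the abstract notes, the two stability metrics can disagree.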
Apertus: Democratizing Open and Compliant LLMs for Global Language Environments
Apertus, Project, Hernández-Cano, Alejandro, Hägele, Alexander, Huang, Allen Hao, Romanou, Angelika, Solergibert, Antoni-Joan, Pasztor, Barna, Messmer, Bettina, Garbaya, Dhia, Ďurech, Eduard Frank, Hakimi, Ido, Giraldo, Juan García, Ismayilzada, Mete, Foroutan, Negar, Moalla, Skander, Chen, Tiancheng, Sabolčec, Vinko, Xu, Yixuan, Aerni, Michael, AlKhamissi, Badr, Mariñas, Inés Altemir, Amani, Mohammad Hossein, Ansaripour, Matin, Badanin, Ilia, Benoit, Harold, Boros, Emanuela, Browning, Nicholas, Bösch, Fabian, Böther, Maximilian, Canova, Niklas, Challier, Camille, Charmillot, Clement, Coles, Jonathan, Deriu, Jan, Devos, Arnout, Drescher, Lukas, Dzenhaliou, Daniil, Ehrmann, Maud, Fan, Dongyang, Fan, Simin, Gao, Silin, Gila, Miguel, Grandury, María, Hashemi, Diba, Hoyle, Alexander, Jiang, Jiaming, Klein, Mark, Kucharavy, Andrei, Kucherenko, Anastasiia, Lübeck, Frederike, Machacek, Roman, Manitaras, Theofilos, Marfurt, Andreas, Matoba, Kyle, Matrenok, Simon, Mendonça, Henrique, Mohamed, Fawzi Roberto, Montariol, Syrielle, Mouchel, Luca, Najem-Meyer, Sven, Ni, Jingwei, Oliva, Gennaro, Pagliardini, Matteo, Palme, Elia, Panferov, Andrei, Paoletti, Léo, Passerini, Marco, Pavlov, Ivan, Poiroux, Auguste, Ponkshe, Kaustubh, Ranchin, Nathan, Rando, Javi, Sauser, Mathieu, Saydaliev, Jakhongir, Sayfiddinov, Muhammad Ali, Schneider, Marian, Schuppli, Stefano, Scialanga, Marco, Semenov, Andrei, Shridhar, Kumar, Singhal, Raghav, Sotnikova, Anna, Sternfeld, Alexander, Tarun, Ayush Kumar, Teiletche, Paul, Vamvas, Jannis, Yao, Xiaozhe, Zhao, Hao, Ilic, Alexander, Klimovic, Ana, Krause, Andreas, Gulcehre, Caglar, Rosenthal, David, Ash, Elliott, Tramèr, Florian, VandeVondele, Joost, Veraldi, Livio, Rajman, Martin, Schulthess, Thomas, Hoefler, Torsten, Bosselut, Antoine, Jaggi, Martin, Schlag, Imanol
We present Apertus, a fully open suite of large language models (LLMs) designed to address two systemic shortcomings in today's open model ecosystem: data compliance and multilingual representation. Unlike many prior models that release weights without reproducible data pipelines or regard for content-owner rights, Apertus models are pretrained exclusively on openly available data, retroactively respecting `robots.txt` exclusions and filtering for non-permissive, toxic, and personally identifiable content. To mitigate risks of memorization, we adopt the Goldfish objective during pretraining, strongly suppressing verbatim recall of data while retaining downstream task performance. The Apertus models also expand multilingual coverage, training on 15T tokens from over 1800 languages, with ~40% of pretraining data allocated to non-English content. Released at 8B and 70B scales, Apertus approaches state-of-the-art results among fully open models on multilingual benchmarks, rivalling or surpassing open-weight counterparts. Beyond model weights, we release all scientific artifacts from our development cycle with a permissive license, including data preparation scripts, checkpoints, evaluation suites, and training code, enabling transparent audit and extension.
- Europe > Austria > Vienna (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Middle East > Jordan (0.04)
- (30 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.67)
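The Goldfish objective mentioned in the Apertus abstract excludes a pseudo-random subset of token positions from the next-token loss so that long passages are never supervised verbatim. The sketch below is an illustrative reconstruction under stated assumptions, not the Apertus training code: the drop rate of 1/k with k=4 and the 13-token hashing window are placeholders, and labels are assumed to be already shifted to align with the logits.

```python
import torch
import torch.nn.functional as F

def goldfish_loss(logits, labels, input_ids, k=4, window=13):
    """logits: (B, T, V); labels/input_ids: (B, T). Drops ~1/k of positions from the loss."""
    B, T, V = logits.shape
    keep = torch.ones(B, T, dtype=torch.bool, device=logits.device)
    for b in range(B):
        for t in range(window, T):
            # A deterministic hash of the preceding `window` tokens decides whether this
            # position is dropped, so the same passage always drops the same tokens.
            h = hash(tuple(input_ids[b, t - window:t].tolist()))
            if h % k == 0:
                keep[b, t] = False
    loss = F.cross_entropy(logits.reshape(-1, V), labels.reshape(-1), reduction="none")
    return loss.reshape(B, T)[keep].mean()
```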